
MON-4036: Add TelemeterClientConfig to ClusterMonitoring API#2755

Open
danielmellado wants to merge 2 commits into openshift:master from
danielmellado:mon_4034_telemeter_client_config

Conversation

@danielmellado
Contributor

Migrate the telemeter-client configmap settings to a CRD field within
ClusterMonitoringSpec in config/v1alpha1. The new TelemeterClientConfig
struct supports:

  • nodeSelector: pod scheduling to specific nodes
  • resources: compute resource requests and limits
  • tolerations: pod tolerations for scheduling
  • topologySpreadConstraints: pod distribution across topology domains

Signed-off-by: Daniel Mellado dmellado@fedoraproject.org
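For orientation, here is a minimal sketch of what such a struct could look like, assuming kubebuilder markers and core/v1 types; the real field godoc and validation markers live in config/v1alpha1/types_cluster_monitoring.go and are reviewed further down.

import corev1 "k8s.io/api/core/v1"

// Sketch only: mirrors the four fields described above. ContainerResource is
// the existing request/limit pair type from the same package.
// +kubebuilder:validation:MinProperties=1
type TelemeterClientConfig struct {
	// nodeSelector defines the nodes on which the pods are scheduled.
	// +optional
	NodeSelector map[string]string `json:"nodeSelector,omitempty"`

	// resources defines the compute resource requests and limits.
	// +optional
	Resources []ContainerResource `json:"resources,omitempty"`

	// tolerations defines the pods' tolerations.
	// +optional
	Tolerations []corev1.Toleration `json:"tolerations,omitempty"`

	// topologySpreadConstraints defines how pods are spread across topology domains.
	// +optional
	TopologySpreadConstraints []corev1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"`
}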

@openshift-ci-robot

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests are triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller automatically detects which contexts are required and uses /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: LGTM mode

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Mar 10, 2026
@openshift-ci-robot

openshift-ci-robot commented Mar 10, 2026

@danielmellado: This pull request references MON-4036 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target either version "4.22." or "openshift-4.22.", but it targets "openshift-5.0" instead.

Details

In response to this:

Migrate the telemeter-client configmap settings to a CRD field within
ClusterMonitoringSpec in config/v1alpha1. The new TelemeterClientConfig
struct supports:

  • nodeSelector: pod scheduling to specific nodes
  • resources: compute resource requests and limits
  • tolerations: pod tolerations for scheduling
  • topologySpreadConstraints: pod distribution across topology domains

Signed-off-by: Daniel Mellado dmellado@fedoraproject.org

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci
Contributor

openshift-ci bot commented Mar 10, 2026

Hello @danielmellado! Some important instructions when contributing to openshift/api:
API design plays an important part in the user experience of OpenShift and as such API PRs are subject to a high level of scrutiny to ensure they follow our best practices. If you haven't already done so, please review the OpenShift API Conventions and ensure that your proposed changes are compliant. Following these conventions will help expedite the api review process for your PR.

@coderabbitai

coderabbitai bot commented Mar 10, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 5fcaeb52-f203-4bfc-abe3-ce10be9869e8

📥 Commits

Reviewing files that changed from the base of the PR and between 371ee72 and 49b7786.

📒 Files selected for processing (1)
  • config/v1alpha1/types_cluster_monitoring.go

📝 Walkthrough


A new optional TelemeterClientConfig field was added to ClusterMonitoringSpec, defining nodeSelector, resources, tolerations, and topologySpreadConstraints with schema validations. The CRD was extended to include telemeterClientConfig. Several existing list validations were tightened by lowering multiple maxItems values from 10 to 5. Test coverage was expanded to validate creation, duplicate/empty/too-many-item errors, and update-path behaviors for TelemeterClientConfig.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Test Structure And Quality: ⚠️ Warning. Test code lacks timeouts on k8sClient.Get() operations and assertions lack meaningful failure messages, violating test quality standards. Resolution: add Eventually(komega.Get(...), timeout) wrappers around k8sClient.Get() calls and add meaningful failure messages to all assertions.

✅ Passed checks (4 passed)

  • Title check: ✅ Passed. The title clearly and accurately summarizes the main change: adding TelemeterClientConfig to the ClusterMonitoring API, which is the primary objective of the changeset.
  • Description check: ✅ Passed. The description is directly related to the changeset, detailing the new TelemeterClientConfig struct and its four main features (nodeSelector, resources, tolerations, topologySpreadConstraints).
  • Docstring Coverage: ✅ Passed. No functions were found in the changed files, so the docstring coverage check was skipped.
  • Stable And Deterministic Test Names: ✅ Passed. All test names in ClusterMonitoringConfig.yaml are static and deterministic, with no dynamic values, generated pod names, timestamps, UUIDs, node names, or namespace names.
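As a hedged illustration of the failed check's suggested remediation, the sketch below shows the Eventually/komega pattern it recommends. It assumes a Ginkgo/Gomega suite where komega.SetClient(k8sClient) was called during setup; the object name, type alias, and timeout are illustrative.

// Assumed imports:
//   "time"
//   . "github.com/onsi/ginkgo/v2"
//   . "github.com/onsi/gomega"
//   "sigs.k8s.io/controller-runtime/pkg/envtest/komega"
//   configv1alpha1 "github.com/openshift/api/config/v1alpha1"
It("retrieves the ClusterMonitoring object", func() {
	cm := &configv1alpha1.ClusterMonitoring{}
	cm.Name = "cluster"
	// Wrap the bare k8sClient.Get in Eventually with an explicit timeout,
	// and give the assertion a meaningful failure message.
	Eventually(komega.Get(cm), 10*time.Second).
		Should(Succeed(), "ClusterMonitoring %q should become retrievable", cm.Name)
})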


Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.11.3)

Error: build linters: unable to load custom analyzer "kubeapilinter": tools/_output/bin/kube-api-linter.so, plugin: not implemented
The command is terminated due to an error: build linters: unable to load custom analyzer "kubeapilinter": tools/_output/bin/kube-api-linter.so, plugin: not implemented


Comment @coderabbitai help to get the list of available commands and usage tips.


@openshift-ci openshift-ci bot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Mar 10, 2026
@qodo-code-review

Review Summary by Qodo

Add TelemeterClientConfig to ClusterMonitoring API for pod scheduling and resources

✨ Enhancement


Walkthroughs

Description
• Add TelemeterClientConfig struct to ClusterMonitoringSpec API
• Support pod scheduling via nodeSelector, tolerations, and topologySpreadConstraints
• Enable resource management with resources field for CPU/memory constraints
• Include comprehensive validation rules and test coverage for new configuration
Diagram
flowchart LR
  A["ClusterMonitoringSpec"] -->|adds field| B["TelemeterClientConfig"]
  B -->|supports| C["nodeSelector"]
  B -->|supports| D["resources"]
  B -->|supports| E["tolerations"]
  B -->|supports| F["topologySpreadConstraints"]
  C -->|controls| G["Pod Scheduling"]
  D -->|manages| H["Resource Limits"]
  E -->|handles| I["Taint Tolerance"]
  F -->|distributes| J["Topology Domains"]


File Changes

1. config/v1alpha1/types_cluster_monitoring.go ✨ Enhancement +80/-0

Define TelemeterClientConfig struct with pod scheduling fields

• Added TelemeterClientConfig field to ClusterMonitoringSpec struct with comprehensive documentation
• Defined new TelemeterClientConfig type with four configuration fields: nodeSelector, resources, tolerations, and topologySpreadConstraints
• Implemented validation constraints including minProperties: 1, minItems: 1, and maxItems: 10 for list fields
• Added detailed comments explaining each field's purpose, defaults, and constraints



2. config/v1alpha1/zz_generated.deepcopy.go Miscellaneous +45/-0

Generate deepcopy methods for TelemeterClientConfig

• Added DeepCopyInto method for TelemeterClientConfig with proper handling of map and slice fields
• Added DeepCopy method for TelemeterClientConfig to support object cloning
• Implemented deep copy logic for the nodeSelector map, resources slice, tolerations slice, and topologySpreadConstraints slice



3. config/v1alpha1/zz_generated.swagger_doc_generated.go 📝 Documentation +13/-0

Generate swagger documentation for TelemeterClientConfig

• Added swagger documentation map for TelemeterClientConfig struct
• Documented all four configuration fields with detailed descriptions of their purpose and constraints
• Included information about default values and validation rules in swagger documentation



4. openapi/generated_openapi/zz_generated.openapi.go Miscellaneous +104/-1

Generate OpenAPI schema for TelemeterClientConfig

• Added OpenAPI schema definition for TelemeterClientConfig with complete property specifications
• Defined schemas for nodeSelector (map of strings), resources (array of ContainerResource), tolerations (array of v1.Toleration), and topologySpreadConstraints (array of v1.TopologySpreadConstraint)
• Included validation constraints in OpenAPI schema (minProperties, maxItems, minItems, list types)
• Added TelemeterClientConfig to dependencies list in ClusterMonitoringSpec schema



5. config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml 🧪 Tests +253/-0

Add comprehensive test cases for TelemeterClientConfig

• Added 11 comprehensive test cases covering valid and invalid TelemeterClientConfig configurations
• Tests validate individual fields: resources, tolerations, topologySpreadConstraints, and nodeSelector
• Tests verify validation constraints: empty object rejection, duplicate detection, size limits, and resource limit ordering
• Tests ensure proper error messages for constraint violations (minProperties, maxItems, minItems)



6. config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml ⚙️ Configuration changes +348/-0

Add TelemeterClientConfig to CRD manifest

• Added telemeterClientConfig field to CRD spec with complete YAML schema definition
• Defined all four sub-fields with detailed descriptions, validation rules, and Kubernetes list type annotations
• Included comprehensive property schemas for ContainerResource items with validation rules for resource quantities
• Added constraints for list sizes (minItems: 1, maxItems: 10) and map sizes (minProperties: 1, maxProperties: 10)



7. config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml ⚙️ Configuration changes +348/-0

Add TelemeterClientConfig to feature-gated CRD manifest

• Added identical telemeterClientConfig field definition to feature-gated CRD manifest
• Mirrors the changes in the main CRD manifest to maintain consistency across generated manifests
• Includes all validation rules and property schemas for the new configuration field



8. payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml ⚙️ Configuration changes +348/-0

Add TelemeterClientConfig to payload CRD manifest

• Added telemeterClientConfig field definition to payload CRD manifest
• Includes complete schema with all four configuration fields and their validation rules
• Maintains consistency with other generated CRD manifests for deployment purposes




@qodo-code-review

qodo-code-review bot commented Mar 10, 2026

Code Review by Qodo

🐞 Bugs (1) 📘 Rule violations (1) 📎 Requirement gaps (0)



Action required

1. TelemeterClientConfig MinProperties undocumented 📘 Rule violation ✓ Correctness
Description
TelemeterClientConfig has +kubebuilder:validation:MinProperties=1, but the type comment does not
document that the object must not be empty (i.e., at least one field must be set). This violates the
requirement that validation markers be fully documented, potentially confusing API consumers about
valid/invalid configurations.
Code

config/v1alpha1/types_cluster_monitoring.go[R575-581]

+// TelemeterClientConfig provides configuration options for the Telemeter Client component
+// that runs in the `openshift-monitoring` namespace. The Telemeter Client collects selected
+// monitoring metrics and forwards them to Red Hat for telemetry purposes.
+// Use this configuration to control pod scheduling and resource allocation.
+// +kubebuilder:validation:MinProperties=1
+type TelemeterClientConfig struct {
+	// nodeSelector defines the nodes on which the Pods are scheduled.
Evidence
PR Compliance ID 6 requires that every validation marker be documented in the field/type comments.
The new TelemeterClientConfig type adds +kubebuilder:validation:MinProperties=1 but its comment
only describes purpose/scheduling/resources and does not state that at least one property must be
set (non-empty object requirement).

AGENTS.md
config/v1alpha1/types_cluster_monitoring.go[575-581]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`TelemeterClientConfig` includes the validation marker `+kubebuilder:validation:MinProperties=1`, but the type-level comment does not document the resulting constraint that the object must not be empty.

## Issue Context
Compliance requires that every validation marker applied to an API field/type be fully documented in comments so users understand constraints without inspecting generated schema.

## Fix Focus Areas
- config/v1alpha1/types_cluster_monitoring.go[575-581]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
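A sketch of the kind of fix the prompt asks for; only the sentence documenting the non-empty constraint is new, the rest comes from the excerpt above.

// TelemeterClientConfig provides configuration options for the Telemeter Client component
// that runs in the `openshift-monitoring` namespace. The Telemeter Client collects selected
// monitoring metrics and forwards them to Red Hat for telemetry purposes.
// Use this configuration to control pod scheduling and resource allocation.
// At least one field must be set; an empty telemeterClientConfig object is rejected.
// +kubebuilder:validation:MinProperties=1
type TelemeterClientConfig struct {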



Remediation recommended

2. Topology constraints not enforced 🐞 Bug ✓ Correctness
Description
The CRD schema for spec.telemeterClientConfig.topologySpreadConstraints documents constraints (e.g.,
maxSkew “0 is not allowed” and specific allowed values for whenUnsatisfiable) but does not encode
them as OpenAPI validations, so the API can accept values that contradict the schema’s own
documented rules.
Code

config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml[R2178-2274]

+                        maxSkew:
+                          description: |-
+                            MaxSkew describes the degree to which pods may be unevenly distributed.
+                            When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference
+                            between the number of matching pods in the target topology and the global minimum.
+                            The global minimum is the minimum number of matching pods in an eligible domain
+                            or zero if the number of eligible domains is less than MinDomains.
+                            For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same
+                            labelSelector spread as 2/2/1:
+                            In this case, the global minimum is 1.
+                            | zone1 | zone2 | zone3 |
+                            |  P P  |  P P  |   P   |
+                            - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2;
+                            scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2)
+                            violate MaxSkew(1).
+                            - if MaxSkew is 2, incoming pod can be scheduled onto any zone.
+                            When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence
+                            to topologies that satisfy it.
+                            It's a required field. Default value is 1 and 0 is not allowed.
+                          format: int32
+                          type: integer
+                        minDomains:
+                          description: |-
+                            MinDomains indicates a minimum number of eligible domains.
+                            When the number of eligible domains with matching topology keys is less than minDomains,
+                            Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed.
+                            And when the number of eligible domains with matching topology keys equals or greater than minDomains,
+                            this value has no effect on scheduling.
+                            As a result, when the number of eligible domains is less than minDomains,
+                            scheduler won't schedule more than maxSkew Pods to those domains.
+                            If value is nil, the constraint behaves as if MinDomains is equal to 1.
+                            Valid values are integers greater than 0.
+                            When value is not nil, WhenUnsatisfiable must be DoNotSchedule.
+
+                            For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same
+                            labelSelector spread as 2/2/2:
+                            | zone1 | zone2 | zone3 |
+                            |  P P  |  P P  |  P P  |
+                            The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0.
+                            In this situation, new pod with the same labelSelector cannot be scheduled,
+                            because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones,
+                            it will violate MaxSkew.
+                          format: int32
+                          type: integer
+                        nodeAffinityPolicy:
+                          description: |-
+                            NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector
+                            when calculating pod topology spread skew. Options are:
+                            - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations.
+                            - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.
+
+                            If this value is nil, the behavior is equivalent to the Honor policy.
+                          type: string
+                        nodeTaintsPolicy:
+                          description: |-
+                            NodeTaintsPolicy indicates how we will treat node taints when calculating
+                            pod topology spread skew. Options are:
+                            - Honor: nodes without taints, along with tainted nodes for which the incoming pod
+                            has a toleration, are included.
+                            - Ignore: node taints are ignored. All nodes are included.
+
+                            If this value is nil, the behavior is equivalent to the Ignore policy.
+                          type: string
+                        topologyKey:
+                          description: |-
+                            TopologyKey is the key of node labels. Nodes that have a label with this key
+                            and identical values are considered to be in the same topology.
+                            We consider each <key, value> as a "bucket", and try to put balanced number
+                            of pods into each bucket.
+                            We define a domain as a particular instance of a topology.
+                            Also, we define an eligible domain as a domain whose nodes meet the requirements of
+                            nodeAffinityPolicy and nodeTaintsPolicy.
+                            e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology.
+                            And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology.
+                            It's a required field.
+                          type: string
+                        whenUnsatisfiable:
+                          description: |-
+                            WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy
+                            the spread constraint.
+                            - DoNotSchedule (default) tells the scheduler not to schedule it.
+                            - ScheduleAnyway tells the scheduler to schedule the pod in any location,
+                              but giving higher precedence to topologies that would help reduce the
+                              skew.
+                            A constraint is considered "Unsatisfiable" for an incoming pod
+                            if and only if every possible node assignment for that pod would violate
+                            "MaxSkew" on some topology.
+                            For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same
+                            labelSelector spread as 3/1/1:
+                            | zone1 | zone2 | zone3 |
+                            | P P P |   P   |   P   |
+                            If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled
+                            to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies
+                            MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler
+                            won't make it *more* imbalanced.
+                            It's a required field.
+                          type: string
Evidence
In the generated CRD manifest, maxSkew is only typed as an integer (no minimum constraint) even
though its description states “0 is not allowed”, and whenUnsatisfiable is only typed as a string
(no enum constraint) despite its description listing specific allowed values. The API type comment
states this field maps directly to the Pod spec field, so having the CRD accept values outside the
documented constraints undermines the API contract and shifts validation burden to later
stages/consumers.

config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml[2178-2199]
config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml[2254-2274]
config/v1alpha1/types_cluster_monitoring.go[628-646]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The generated CRD schema for `spec.telemeterClientConfig.topologySpreadConstraints` documents constraints (e.g., `maxSkew` “0 is not allowed” and specific allowed values for `whenUnsatisfiable`) but does not enforce them via OpenAPI/CEL validations.

### Issue Context
This means the API server may accept values that contradict the schema’s own documented rules, leaving validation to later consumers.

### Fix Focus Areas
- config/v1alpha1/types_cluster_monitoring.go[628-646]
- config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml[624-832]
- config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml[2095-2287]

### Implementation notes
- Add `+kubebuilder:validation:XValidation` rules on `TopologySpreadConstraints` to enforce:
 - `maxSkew > 0` for every element
 - `whenUnsatisfiable` is one of `DoNotSchedule` / `ScheduleAnyway`
- Add a CRD schema test case that attempts to create a resource with `maxSkew: 0` and/or an invalid `whenUnsatisfiable` and asserts the expected admission error substring.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
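One plausible shape for those rules, sketched as CEL XValidation markers on the topologySpreadConstraints field; this is not the PR's actual code, and the has() guards are merely defensive since both sub-fields are required in the Pod API.

// topologySpreadConstraints defines the pod's topology spread constraints
// (the field's other markers are omitted from this sketch).
// +kubebuilder:validation:XValidation:rule="self.all(c, !has(c.maxSkew) || c.maxSkew > 0)",message="maxSkew must be greater than 0"
// +kubebuilder:validation:XValidation:rule="self.all(c, !has(c.whenUnsatisfiable) || c.whenUnsatisfiable in ['DoNotSchedule', 'ScheduleAnyway'])",message="whenUnsatisfiable must be DoNotSchedule or ScheduleAnyway"
// +optional
TopologySpreadConstraints []corev1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"`

Note that any added CEL rule would also count against the validation cost budget discussed later in this thread.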



@openshift-ci
Contributor

openshift-ci bot commented Mar 10, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign everettraven for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Comment on lines +575 to +581
// TelemeterClientConfig provides configuration options for the Telemeter Client component
// that runs in the `openshift-monitoring` namespace. The Telemeter Client collects selected
// monitoring metrics and forwards them to Red Hat for telemetry purposes.
// Use this configuration to control pod scheduling and resource allocation.
// +kubebuilder:validation:MinProperties=1
type TelemeterClientConfig struct {
// nodeSelector defines the nodes on which the Pods are scheduled.


Action required

1. TelemeterClientConfig MinProperties undocumented 📘 Rule violation ✓ Correctness

TelemeterClientConfig has +kubebuilder:validation:MinProperties=1, but the type comment does not
document that the object must not be empty (i.e., at least one field must be set). This violates the
requirement that validation markers be fully documented, potentially confusing API consumers about
valid/invalid configurations.
Agent Prompt
## Issue description
`TelemeterClientConfig` includes the validation marker `+kubebuilder:validation:MinProperties=1`, but the type-level comment does not document the resulting constraint that the object must not be empty.

## Issue Context
Compliance requires that every validation marker applied to an API field/type be fully documented in comments so users understand constraints without inspecting generated schema.

## Fix Focus Areas
- config/v1alpha1/types_cluster_monitoring.go[575-581]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml (1)

580-832: Add failure cases for nodeSelector and tolerations bounds.

The new schema also enforces min/maxProperties on nodeSelector and min/maxItems on tolerations, but this block only exercises negative paths for resources and topologySpreadConstraints. A couple of explicit failures here would make regressions in those two branches much harder to miss.

Example additions
+    - name: Should reject TelemeterClientConfig with empty nodeSelector
+      initial: |
+        apiVersion: config.openshift.io/v1alpha1
+        kind: ClusterMonitoring
+        spec:
+          telemeterClientConfig:
+            nodeSelector: {}
+      expectedError: 'spec.telemeterClientConfig.nodeSelector: Invalid value: 0: spec.telemeterClientConfig.nodeSelector in body should have at least 1 properties'
+
+    - name: Should reject TelemeterClientConfig with empty tolerations array
+      initial: |
+        apiVersion: config.openshift.io/v1alpha1
+        kind: ClusterMonitoring
+        spec:
+          telemeterClientConfig:
+            tolerations: []
+      expectedError: 'spec.telemeterClientConfig.tolerations: Invalid value: 0: spec.telemeterClientConfig.tolerations in body should have at least 1 items'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml`
around lines 580 - 832, Add negative test cases exercising nodeSelector and
tolerations bounds: include tests named e.g. "Should reject
TelemeterClientConfig with empty nodeSelector" and "Should reject
TelemeterClientConfig with too many nodeSelector properties" that set
spec.telemeterClientConfig.nodeSelector to {} and to an object exceeding the
schema maxProperties respectively, and include tests "Should reject
TelemeterClientConfig with empty tolerations array" (set tolerations: []) and
"Should reject TelemeterClientConfig with too many tolerations" (provide more
than the allowed maxItems). Ensure each test uses the same ClusterMonitoring
YAML structure as other cases and sets expectedError strings matching the schema
validation messages for min/maxProperties and min/maxItems on nodeSelector and
tolerations so failures are asserted.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In
`@config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml`:
- Around line 580-832: Add negative test cases exercising nodeSelector and
tolerations bounds: include tests named e.g. "Should reject
TelemeterClientConfig with empty nodeSelector" and "Should reject
TelemeterClientConfig with too many nodeSelector properties" that set
spec.telemeterClientConfig.nodeSelector to {} and to an object exceeding the
schema maxProperties respectively, and include tests "Should reject
TelemeterClientConfig with empty tolerations array" (set tolerations: []) and
"Should reject TelemeterClientConfig with too many tolerations" (provide more
than the allowed maxItems). Ensure each test uses the same ClusterMonitoring
YAML structure as other cases and sets expectedError strings matching the schema
validation messages for min/maxProperties and min/maxItems on nodeSelector and
tolerations so failures are asserted.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 897b5361-c81f-468e-bb44-621465732eb1

📥 Commits

Reviewing files that changed from the base of the PR and between 7127010 and 40f8860.

⛔ Files ignored due to path filters (5)
  • config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
  • config/v1alpha1/zz_generated.deepcopy.go is excluded by !**/zz_generated*
  • config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • config/v1alpha1/zz_generated.swagger_doc_generated.go is excluded by !**/zz_generated*
  • openapi/generated_openapi/zz_generated.openapi.go is excluded by !openapi/**, !**/zz_generated*
📒 Files selected for processing (3)
  • config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml
  • config/v1alpha1/types_cluster_monitoring.go
  • payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml

@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 11, 2026
saschagrunert

This comment was marked as outdated.

Member

@saschagrunert saschagrunert left a comment


The new TelemeterClientConfig struct follows the established pattern from other component configs in this file. Markers and godoc are mostly well done.

Main issue: the PR removes OpenShiftStateMetricsConfig (merged to master one day prior via MON-4033, 4cf5fe1) and replaces it with TelemeterClientConfig. After rebasing on current master, both fields need to coexist.

Also needs additional test coverage for nodeSelector and tolerations boundary validation.

@danielmellado
Contributor Author

While rebasing, the CRD hit the Kubernetes CEL validation cost budget (StaticEstimatedCRDCostLimit). The quantity() CEL function has a high fixed estimated cost per invocation, and with 6 structs in ClusterMonitoringSpec each embedding []ContainerResource, the cumulative cost exceeded the limit by ~13%.

Fixed by reducing MaxItems on all resources []ContainerResource fields from 10 to 5 (sufficient for practical use: cpu, memory, and hugepages variants). I also added an explanatory comment on the ContainerResource type.
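To make the cost arithmetic concrete, here is a hedged sketch of the pattern in play; the type shape, rule, and wrapper name below are illustrative rather than the PR's exact code. The static estimator has no runtime values, so it charges each per-item CEL rule at its worst case and multiplies by the list's maxItems; with quantity() carrying a high fixed cost and six lists embedding ContainerResource, halving MaxItems roughly halves each list's contribution.

// ContainerResource pairs a resource name with request/limit quantity strings.
// The per-object CEL rule invokes quantity() for every list item, so the
// static estimator charges its worst-case cost maxItems times per list.
// +kubebuilder:validation:XValidation:rule="!has(self.request) || !has(self.limit) || quantity(self.request).compareTo(quantity(self.limit)) <= 0",message="request must be less than or equal to limit"
type ContainerResource struct {
	// +required
	Name string `json:"name"`
	// +optional
	Request string `json:"request,omitempty"`
	// +optional
	Limit string `json:"limit,omitempty"`
}

// ExampleComponentConfig is a hypothetical stand-in for each of the six
// component configs in ClusterMonitoringSpec that embed such a list.
type ExampleComponentConfig struct {
	// +kubebuilder:validation:MaxItems=5 // lowered from 10 to fit the CEL cost budget
	// +kubebuilder:validation:MinItems=1
	// +optional
	Resources []ContainerResource `json:"resources,omitempty"`
}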

@danielmellado danielmellado force-pushed the mon_4034_telemeter_client_config branch from 40f8860 to 2f6d3d1 on March 12, 2026 at 14:19
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 12, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml (1)

37-60: Add the missing prometheusOperatorConfig.resources max-items regression test.

These cases cover the new MaxItems=5 boundary for the other touched resources lists, but spec.prometheusOperatorConfig.resources was tightened in the API too and is still untested here. One rejection case for 6 items would keep all changed validation paths covered.

Proposed test case
+    - name: Should reject PrometheusOperatorConfig with more than 5 resource items
+      initial: |
+        apiVersion: config.openshift.io/v1alpha1
+        kind: ClusterMonitoring
+        spec:
+          prometheusOperatorConfig:
+            resources:
+              - name: "cpu"
+                request: "100m"
+              - name: "memory"
+                request: "64Mi"
+              - name: "hugepages-2Mi"
+                request: "32Mi"
+              - name: "hugepages-1Gi"
+                request: "1Gi"
+              - name: "ephemeral-storage"
+                request: "1Gi"
+              - name: "nvidia.com/gpu"
+                request: "1"
+      expectedError: 'spec.prometheusOperatorConfig.resources: Too many: 6: must have at most 5 items'

Also applies to: 312-333, 462-481, 664-683, 933-952

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml`
around lines 37 - 60, Add a regression test that mirrors the alertmanagerConfig
case but targets spec.prometheusOperatorConfig.resources to assert MaxItems=5
enforcement: create a new test entry named like "Should reject
PrometheusOperatorConfig with more than 5 items" with initial YAML containing
spec.prometheusOperatorConfig.resources listing 6 distinct resource objects
(e.g., cpu, memory, hugepages-2Mi, hugepages-1Gi, ephemeral-storage,
nvidia.com/gpu) and set expectedError to
'spec.prometheusOperatorConfig.resources: Too many: 6: must have at most 5
items' so the tightened validation path for prometheusOperatorConfig.resources
is covered.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@config/v1alpha1/types_cluster_monitoring.go`:
- Around line 211-218: The MaxItems for several []ContainerResource "resources"
lists was tightened from 10 to 5, which is schema-breaking; revert the
+kubebuilder:validation:MaxItems annotation (or change it back to 10) for the
"resources" list annotations so existing ClusterMonitoring CRs with 6–10 entries
continue to validate, specifically update the +kubebuilder:validation:MaxItems
on the "resources" list annotations associated with the ContainerResource lists
(the "resources" field) in types_cluster_monitoring.go so they match the
original limit (10) or remove the constraint until a migration is provided.

---

Nitpick comments:
In
`@config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml`:
- Around line 37-60: Add a regression test that mirrors the alertmanagerConfig
case but targets spec.prometheusOperatorConfig.resources to assert MaxItems=5
enforcement: create a new test entry named like "Should reject
PrometheusOperatorConfig with more than 5 items" with initial YAML containing
spec.prometheusOperatorConfig.resources listing 6 distinct resource objects
(e.g., cpu, memory, hugepages-2Mi, hugepages-1Gi, ephemeral-storage,
nvidia.com/gpu) and set expectedError to
'spec.prometheusOperatorConfig.resources: Too many: 6: must have at most 5
items' so the tightened validation path for prometheusOperatorConfig.resources
is covered.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 0f07ec4c-1986-4bc9-917b-36a27aee9952

📥 Commits

Reviewing files that changed from the base of the PR and between 40f8860 and 2f6d3d1.

⛔ Files ignored due to path filters (5)
  • config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
  • config/v1alpha1/zz_generated.deepcopy.go is excluded by !**/zz_generated*
  • config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • config/v1alpha1/zz_generated.swagger_doc_generated.go is excluded by !**/zz_generated*
  • openapi/generated_openapi/zz_generated.openapi.go is excluded by !openapi/**, !**/zz_generated*
📒 Files selected for processing (3)
  • config/v1alpha1/tests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml
  • config/v1alpha1/types_cluster_monitoring.go
  • payload-manifests/crds/0000_10_config-operator_01_clustermonitorings.crd.yaml

Comment on lines +211 to 218
// Maximum length for this list is 5.
// Minimum length for this list is 1.
// Each resource name must be unique within this list.
// +optional
// +listType=map
// +listMapKey=name
-// +kubebuilder:validation:MaxItems=10
+// +kubebuilder:validation:MaxItems=5
// +kubebuilder:validation:MinItems=1


⚠️ Potential issue | 🟠 Major

Avoid tightening existing resources lists without a migration plan.

Lines 211-218, 422-429, 493-500, 557-564, and 617-624 all shrink already-shipped []ContainerResource fields from 10 items to 5. That is a schema-breaking contraction: any existing ClusterMonitoring object using 6-10 entries in one of those fields will stop passing validation on later updates once this CRD is installed. Please either preserve backward compatibility for the existing fields or document/implement a migration path before merging.

Also applies to: 422-429, 493-500, 557-564, 617-624, 690-697

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@config/v1alpha1/types_cluster_monitoring.go` around lines 211 - 218, The
MaxItems for several []ContainerResource "resources" lists was tightened from 10
to 5, which is schema-breaking; revert the +kubebuilder:validation:MaxItems
annotation (or change it back to 10) for the "resources" list annotations so
existing ClusterMonitoring CRs with 6–10 entries continue to validate,
specifically update the +kubebuilder:validation:MaxItems on the "resources" list
annotations associated with the ContainerResource lists (the "resources" field)
in types_cluster_monitoring.go so they match the original limit (10) or remove
the constraint until a migration is provided.

//
// When omitted, this means the user has no opinion and the platform is left
// to choose reasonable defaults. These defaults are subject to change over time.
// The current default is an empty list.
Member


Another consistency nit:

Suggested change
-// The current default is an empty list.
+// Default is empty list.

Contributor Author


Same as above, thanks!

Migrate the telemeter-client configmap settings to a CRD field within
ClusterMonitoringSpec in config/v1alpha1. The new TelemeterClientConfig
struct supports:

- nodeSelector: pod scheduling to specific nodes
- resources: compute resource requests and limits
- tolerations: pod tolerations for scheduling
- topologySpreadConstraints: pod distribution across topology domains

Signed-off-by: Daniel Mellado <dmellado@fedoraproject.org>
@danielmellado danielmellado force-pushed the mon_4034_telemeter_client_config branch from 2f6d3d1 to 371ee72 on March 12, 2026 at 15:16
@danielmellado
Contributor Author

The failure on verify-crdify is to be expected, per the CEL fix:

(v1alpha1) ^.spec.prometheusOperatorAdmissionWebhookConfig.resources - maxItems: maximum decreased: 10 -> 5

+ echo This verifier checks all files that have changed. In some cases you may have changed or renamed a file that already contained api violations, but you are not introducing a new violation. In such cases it is appropriate to /override the failing CI job.

Co-authored-by: Sascha Grunert <sgrunert@redhat.com>
@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 13, 2026
@openshift-merge-robot
Contributor

PR needs rebase.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@danielmellado
Contributor Author

Thanks, I'm committing this, rebasing and I'll push the updated version ;)

@openshift-ci
Contributor

openshift-ci bot commented Mar 13, 2026

@danielmellado: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/unit 49b7786 link true /test unit
ci/prow/verify-deps 49b7786 link true /test verify-deps
ci/prow/minor-images 49b7786 link true /test minor-images
ci/prow/build 49b7786 link true /test build
ci/prow/integration 49b7786 link true /test integration
ci/prow/verify-crdify 49b7786 link true /test verify-crdify
ci/prow/okd-scos-images 49b7786 link true /test okd-scos-images
ci/prow/verify-client-go 49b7786 link true /test verify-client-go
ci/prow/lint 49b7786 link true /test lint
ci/prow/verify-feature-promotion 49b7786 link true /test verify-feature-promotion
ci/prow/images 49b7786 link true /test images
ci/prow/verify-crd-schema 49b7786 link true /test verify-crd-schema
ci/prow/verify 49b7786 link true /test verify

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


Labels

jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files.


4 participants